Is Big Data for IC Design Too Big to Manage?
Big data? Everyone's doing it. It shows up now in biotech, finance, agriculture, education and transportation. Industries are letting it reshape the very nature of their business. But what about semiconductors?

IC Manage Inc., a provider of design data and IP management software for chip companies, announced Wednesday (Dec. 13) the launch of its Big Data Labs. Dean Drako, IC Manage CEO, described Big Data Labs as a "platform" on which his company hopes to "develop and customize new big-data-based design analytic tools" for customers.

In the big-data era, semiconductor companies are already designing ICs that go into data centers. The question, though, is whether these chip designers use big data themselves. They already have tons of raw data — spit out by different EDA vendors' tools. But have electronics designers figured out a way to optimize and accelerate their chips with big data? The simple answer is "not yet."

Certainly, the semiconductor industry has been using data management software for several years. IC Manage has been offering tools to "keep large amounts of data safe and get it neatly organized so that it makes data accessible to others," explained Laurie Balch, chief analyst with Gary Smith EDA. But as for the analytical tools that might enable IC designers to apply this data to intelligent decisions, "it's only now that it's become feasible," Balch told us.

IC Manage isn't a traditional EDA vendor. It makes no conventional EDA tools such as simulation, synthesis or layout. Instead, the company's specialty is "EDA enterprise tools," explained Balch. Describing it as "a company with a stronghold in the IC design database market," she called IC Manage "the industry leader, by a long shot."

At a time when "electronics design is known for a huge amount of data it's creating," she observed, chip vendors are wrestling with increasingly voluminous data. IC Manage might just become the first company to come up with a solution.

Unstructured data

Big data, by definition, consists largely of unstructured data, explained Drako. He acknowledged that the world of electronics design is already seeing a huge spike in unstructured data, coming independently from various tools designed by different EDA companies. Most IC designers, however, aren't equipped to absorb all this stuff, let alone make sense of it. Doing so is time-consuming and resource-intensive.

Connecting the dots between such independent sets of data across tools and vendors is no easy feat, said Drako. Furthermore, "there are only limited industry and company expertise and resources available" to quickly derive actionable insights and create management options with implementation details, he added.

This is where IC Manage hopes to come in. Drako explained that IC Manage has overlaid unstructured data on top of the organized design data. "By merging unstructured data (such as verification log files) and structured data (electronics design data), we are offering a hybrid database," said Drako, which chip companies can use to run high-performance, advanced EDA analytics. The result IC Manage hopes to achieve is a platform offering visual analytics that helps users create interactive reports.

Tape-out prediction

This isn't IC Manage's first foray into big-data design tools for IC vendors.
A few years ago, the company developed a big-data product called "Envision Design Progress Analytics." The tool gave IC Manage customers a foundation for accurately predicting the tape-out of their new chips.

With the launch of Big Data Labs, though, IC Manage is taking things a few steps further. Organizing big data and making it accessible to everyone in a design team or a whole company isn't enough. By working closely with customers (chip companies) and partners (EDA tool vendors), IC Manage hopes to develop — and potentially customize — tools that keep track of every designer's contributions, revision history, IP reuse and other actions. The result will be a tool that lets designers see the impact of their decisions on the rest of the design process; its analysis will help them make intelligent decisions, noted Drako.

Envision Verification

On Wednesday, IC Manage simultaneously announced the launch of its first verification analytics tool, "Envision Verification," based on the Big Data Labs platform. Leveraging the platform's ability to link multi-vendor environments, the tool delivers "near-real-time visual analytics," according to IC Manage.

"To understand everything that's going on," Drako said, Envision Verification pulls in all verification data from different EDA vendor environments — Synopsys, Mentor and Cadence — and tracks design activity, regression tests, verification status and bugs that emerge along the way. Then it identifies the changes.

Without such big-data verification, Drako said, "Traditionally, if you were a part of a 300-engineer team, you'd have to spend a lot of time asking around, 'Did you change anything?' 'What's tested?' 'Who broke it?' 'Are we missing anything?' etc."

With an interactive report of verification results in hand, "Envision can accelerate functional verification analytics by 10 to 100 times. You can identify not only bottlenecks but also root causes of problems that have emerged during a verification process," said Drako.

Bug-tracking analytics is also part of the process. With Envision Verification, "an engineer can assign a ticket for further debug when a test fails or update the bug status to pass, fail, or needs investigation," according to IC Manage.

Over time, Drako noted, "Development teams can immediately tie verification progress bottlenecks directly to design changes to quickly optimize their resources to accelerate tight schedules. Also, they should be able to do milestone estimations."

Analyst Balch explained that verification is an "extraordinarily big challenge" for electronics designers. Because everyone aims for "first-time right" design and manufacture — given the cost-prohibitive nature of chip redesign — "designers need to verify the heck out of it," she said. Verification involves many test facets, she noted, "as outcomes change depending on operating conditions and you need to be cognizant of corner cases."

The first tool coming out of IC Manage's newly launched Big Data Labs is this functional verification tool. But what else does IC Manage have in the pipeline? Drako hesitated to make predictions until his company is actually ready to reveal them. Nonetheless, he said that other logical big-data analytics products would include physical verification, timing analysis and power.
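Drako's description of joining unstructured regression logs from multiple simulators with structured design-revision data, and of engineers updating a bug's status after triage, can be pictured with a small data-model sketch. The sketch below is purely illustrative: the class names, fields and helper function are hypothetical and are not IC Manage's schema or API.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PASS = "pass"
    FAIL = "fail"
    NEEDS_INVESTIGATION = "needs investigation"

@dataclass
class RegressionResult:
    test: str
    simulator: str        # which vendor's tool produced the raw log (hypothetical field)
    revision: str         # design revision the run was launched against
    status: Status
    log_excerpt: str = "" # unstructured text pulled from the verification log

@dataclass
class BugTicket:
    test: str
    owner: str
    status: Status = Status.NEEDS_INVESTIGATION  # engineer updates this after triage

def link_failures_to_changes(results, changes_by_revision):
    """Join unstructured regression results with structured revision metadata:
    for every failing test, list the design changes in the revision it ran on."""
    return {
        r.test: changes_by_revision.get(r.revision, [])
        for r in results
        if r.status is Status.FAIL
    }

# Usage: a failing test is traced back to the change set it ran against,
# and a ticket is opened for further debug.
results = [RegressionResult("uart_tx_smoke", "vendor_a_sim", "rev_1042", Status.FAIL)]
changes = {"rev_1042": ["uart: widen FIFO to 64 entries"]}
print(link_failures_to_changes(results, changes))
ticket = BugTicket(test="uart_tx_smoke", owner="alice")
```

In a real flow the structured side would come from the design-management database and the unstructured side from parsed simulator logs; the join key used here (the design revision) is only one plausible choice.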
Pointing out that there are many parts to functional verification, including simulation and emulation of semiconductor, circuit, digital and analog designs, Balch suspects that IC Manage will be busy for a long time further developing its functional verification tools, including customizations.

Who will use it?

There is no denying that the use of data-management tools has been somewhat "slow to take off" among chip companies, according to Balch. Given a budget, chip designers prefer to buy core design tools rather than big-data analytics tools. "They just don't see it as vital. They also think it's only for a big design team."

With the semiconductor industry currently engulfed in mega-mergers, however, the landscape might be changing more rapidly than previously predicted. If Broadcom succeeds in buying Qualcomm, for example, imagine the data-management nightmare that will hit the huge number of design teams running inside the two giants. The merged company would need to monitor progress across different design teams, ensuring that design information and IP are shared across the board.

IC Manage's Envision Verification Analytics is available immediately as part of the IC Manage Envision product suite.
Release time: 2017-12-14
Algorithm Speeds GPU-based AI Training 10x on Big Data Sets
IBM Zurich researchers have developed a generic artificial-intelligence preprocessing building block for accelerating Big Data machine-learning algorithms by at least 10 times over existing methods. The approach, which IBM presented Monday (Dec. 4) at the Neural Information Processing Systems conference (NIPS 2017) in Long Beach, Calif., uses mathematical duality to cherry-pick the items in a Big Data stream that will make a difference, ignoring the rest.

"Our motivation was how to use hardware accelerators, such as GPUs [graphics processing units] and FPGAs [field-programmable gate arrays], when they do not have enough memory to hold all the data points" for Big Data machine learning, IBM Zurich collaborator Celestine Dünner, co-inventor of the algorithm, told EE Times in advance of the announcement.

"To the best of our knowledge, we are the first to have a generic solution with a 10x speedup," said co-inventor Thomas Parnell, an IBM Zurich mathematician. "Specifically, for traditional, linear machine learning models — which are widely used for data sets that are too big for neural networks to train on — we have implemented the techniques on the best reference schemes and demonstrated a minimum of a 10x speedup."

IBM Zurich researcher Martin Jaggi at École Polytechnique Fédérale de Lausanne (EPFL) also contributed to the machine-learning preprocessing algorithm.

For their initial demonstration, the researchers used a single Nvidia Quadro M4000 GPU with 8 gigabytes of memory, training on a 30-Gbyte data set of 40,000 photos using a support vector machine (SVM) algorithm that resolves the images into classes for recognition. The SVM algorithm also creates a geometric interpretation of the model learned (unlike neural networks, which cannot justify their conclusions). IBM's data preprocessing method enabled the algorithm to run in less than one minute, a tenfold speedup over existing methods using limited-memory training.

The key to the technique is preprocessing each data point to see whether it is the mathematical dual of a point already processed. If it is, the algorithm just skips it, a process that becomes increasingly frequent as the data set is processed. "We calculate the importance of each data point before it is processed by measuring how big the duality gap is," Dünner said.

"If you can fit your problem in the memory space of the accelerator, then running in-memory will achieve even better results," Parnell told EE Times. "So our results apply only to Big Data problems. Not only will it speed up execution time by 10 times or more, but if you are running in the cloud, you won't have to pay as much."

As Big Data sets grow, such time- and money-saving preprocessing algorithms will become increasingly important, according to IBM. To show that its duality-based algorithm works with arbitrarily large data sets, the company showed an eight-GPU version at NIPS that handles a billion examples of click-through data for web ads.

The researchers are developing the algorithm further for deployment in IBM's Cloud. It will be recommended for Big Data sets involving social media, online marketing, targeted advertising, finding patterns in telecom data, and fraud detection.

For details, read Efficient Use of Limited-Memory Accelerators for Linear Learning on Heterogeneous Systems, by Dünner, Parnell, and Jaggi.
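Dünner's remark about measuring "how big the duality gap is" for each data point can be illustrated with a toy sketch for a hinge-loss SVM: each example contributes a non-negative term to the primal-dual gap, and only the highest-gap (most informative) examples are kept within the accelerator's memory budget. This is a simplified sketch of the general idea, not IBM's implementation; the function names, selection rule and parameter values are assumptions chosen for illustration.

```python
import numpy as np

def per_example_duality_gap(X, y, alpha, lam):
    """Coordinate-wise duality-gap contributions for a hinge-loss SVM.

    With the primal vector w induced by the dual variables alpha, each example
    contributes gap_i = hinge(y_i * x_i.w) + alpha_i * (y_i * x_i.w - 1), scaled
    by 1/n. A larger gap_i means the point is more 'informative' for training.
    """
    n = X.shape[0]
    w = X.T @ (alpha * y) / (lam * n)   # primal iterate induced by alpha
    margins = y * (X @ w)
    hinge = np.maximum(0.0, 1.0 - margins)
    return (hinge + alpha * (margins - 1.0)) / n

# Toy selection step: keep only the points that fit in accelerator memory.
rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 64))
y = np.sign(rng.standard_normal(10_000))
alpha = np.zeros(10_000)            # dual variables (start at zero)
budget = 1_000                      # hypothetical number of examples that fit on the GPU

gaps = per_example_duality_gap(X, y, alpha, lam=1e-3)
keep = np.argsort(gaps)[-budget:]   # send only the highest-gap points to the accelerator
```

In the researchers' scheme this scoring is repeated as training progresses, so the subset held in accelerator memory keeps shifting toward the examples that still matter.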
Release time: 2017-12-06
Beating IoT Big Data With Brain Emulation
To beat Big Data, according to German electronics company Robert Bosch, we need to tier the solution by making every level smart — from edge sensors to concentration hubs to analytics in the cloud.

Luckily, we have the smart sensors of the brain — eyes, ears, nose, taste buds and touch — as the smartest model in the universe (as we know it) after which to fashion our electronic Big Data solutions for the Internet of Things (IoT), said Marcellino Gemelli, head of business development at Bosch Sensortec.

"We need to feed our Big Data problems into a model generator based on the human brain, then use this model to generate a prediction of what the optimal solution will look like," Gemelli told attendees at the recent SEMI MEMS & Sensor Executive Congress (MSEC). "These machine learning solutions will work on multiple levels, because of the versatility of the neuron."

Neurons are the microprocessors of the brain — accepting thousands of Big Data inputs, yet outputting a single voltage spike down their axon after receiving the right kind of input from thousands of dendrites mediated by memory synapses. In this way the receptors of the eyes, ears, nose, taste buds and touch sensors (for presence, pressure and temperature, mainly) can pre-process the deluge of incoming raw Big Data before sending summaries — encoded as voltage spikes — up the spinal cord to the hub called the "old brain" (the brain stem and automatic behavior centers, such as those handling breathing, heartbeat and reflexes). Finally, the pre-processed data makes its way through a vast interconnect array called the white matter to its final destination in the conscious parts of the brain (the gray matter of the cerebral cortex). Each part of the cerebral cortex is dedicated to a function such as vision, speech, smell, taste or touch, as well as the cognitive functions of attention, reasoning, evaluation, judgment and consequential planning.

"The mathematical equivalent of the brain's neural network is the perceptron, which can learn with its variable-conductance synapses while Big Data is streaming through it," said Gemelli. "And we can add multiple levels of perceptrons to learn everything a human can learn, such as all the different ways that people walk."

Moore's Law also helps out with multi-layered perceptrons — called deep learning — because it offers a universal way to do smart processing at the edge sensor, in the hub and during analytics in the cloud.

"First of all, volume helps — the more Big Data the better," said Gemelli. "Second, variety helps — learning all the different aspects of something, such as the different gaits people use to walk. And third, the velocity at which a perceptron needs to respond needs to be quantified. Once you have these three parameters defined, you can optimize your neural network for any particular application."

For example, Gemelli said, a smartwatch/smartphone/smart-cloud combination can divide and conquer Big Data. The smartwatch evaluates the real-time continuous data coming in from an individual user, then sends the most important data in summaries to the smartphone every few minutes. Then, just a few times a day, the smartphone can send trending summaries to the smart cloud.
There, the detailed analysis of the most important data points can be massaged in the cloud and fed back to the particular user wearing the smartwatch, as well as to other smartwatch wearers, as suggestions of how anonymous others have met the same goals they have set.

Bosch itself is emulating this three-tiered, brain-like model by putting processors on its edge sensors so they can identify and concentrate Big Data trends before transmitting to smart hubs.

"Smart cities, in particular, need to make use of smart sensors with built-in processors to perform the real-time edge sensor trending," said Gemelli. "Then they send those trends to hubs, which analyze and send the most important ones to the cloud for analysis into actionable information for city managers. That is Bosch's vision of the smart cities of the future."
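Gemelli's point that a perceptron learns while data streams through it comes down to a very small update rule: the weights (the synapses) are nudged only when a streamed sample is misclassified. A minimal online-perceptron sketch, with hypothetical names and toy data, might look like this:

```python
import numpy as np

def perceptron_stream(sample_stream, n_features, lr=1.0):
    """Online perceptron: the weight vector (the 'synapses') is updated while
    labelled samples stream through, one at a time, with labels in {-1, +1}."""
    w = np.zeros(n_features)
    b = 0.0
    for x, label in sample_stream:
        if label * (np.dot(w, x) + b) <= 0:   # misclassified: adjust the synapses
            w += lr * label * x
            b += lr * label
    return w, b

# Toy stream: points above the line x0 + x1 = 0 are labelled +1, below are -1.
rng = np.random.default_rng(1)
stream = ((x, 1.0 if x.sum() > 0 else -1.0) for x in rng.standard_normal((5_000, 2)))
w, b = perceptron_stream(stream, n_features=2)
```

Stacking many such units in layers, with differentiable activations and gradient-based training, gives the multi-layered perceptrons, or deep learning, that Gemelli mentions.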
Release time: 2017-11-20
Big Data Algorithms, Languages Expand
The buzz around big data is spawning new algorithms, programming languages, and techniques at the speed of software.

"Neural networks have been around for a long time. What's new is the large amounts of data we have to run against them and the intensity of engineering around them," said Inderpal Bhandari, a veteran computer scientist who was named IBM's first chief data officer. He described work using generative adversarial networks to pit two neural nets against each other to create a better one. "This is an engineering idea that leads to more algorithms — there is a lot of that kind of engineering around neural networks now."

In some ways, the algorithms are anticipating tomorrow's hardware. For example, quantum algorithms are becoming hot because they "allow you to do some of what quantum computers would do if they were available, and these algorithms are coming of age," said Anthony Scriffignano, chief data scientist for Dun & Bradstreet.

Deep belief networks are another hot emerging approach. Scriffignano describes them as "a non-regressive way to modify your goals and objectives while you are still learning — as such, it has characteristics of tomorrow's neuromorphic computers," systems geared to mimic the human brain.

At Stanford, the DeepDive algorithms developed by Chris Ré have been getting traction. They help computers understand and use unstructured data like text, tables, and charts as easily as relational databases or spreadsheets, said Stephen Eglash, who heads the university's data science initiative. "Much of existing data is un- or semi-structured. For example, we can read a datasheet with ease, but it's hard for a computer to make sense of it."

So far, DeepDive has helped oncologists use computers to interpret photos of tumors. It's being used by the New York attorney general as a law-enforcement tool. It's also in use across a large number of companies working in different domains. DeepDive is unique in part because "it IDs and labels everything and then uses learning engines and probabilistic techniques to figure out what they mean," said Eglash.

While successful, the approach is just one of many algorithm efforts in academia these days. Others focus on areas such as computer vision or try to ID anomalies in real-time data streams. "We could go on and on," said Eglash.
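Bhandari's example of pitting two neural nets against each other is the generative-adversarial setup: a generator tries to produce samples the discriminator cannot tell from real data, and each network's loss is defined by the other's output. A minimal PyTorch sketch on toy 1-D data, with network sizes, learning rates and step counts chosen only for illustration, might look like this:

```python
import torch
import torch.nn as nn

# Toy 1-D "real" data: samples drawn from a normal distribution centered at 4.
def real_batch(n):
    return torch.randn(n, 1) + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # discriminator (logits)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Train the discriminator to tell real samples from generated ones.
    real, noise = real_batch(64), torch.randn(64, 8)
    fake = G(noise).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The "better network" Bhandari refers to emerges from this tug-of-war: as the discriminator improves, the generator is forced to produce ever more realistic samples, and vice versa.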
Release time: 2017-06-09
Big Data Makes Big Waves
You could say that big data got its start when Sergey Brin and Larry Page helped develop an algorithm that found more relevant results on the web than the search engines of their rivals. The lesson of Google continues to ripple through all businesses seeking competitive insights from their data pools, however large or small.

Today, the Internet of Things is opening vast new data sources, expanding big data's promise to reshape business, technology, and the job of the technologist. Along the way, big data is inspiring new kinds of processor and system architectures, as well as evolving algorithms and programming techniques.

"The concept of being overwhelmed by data is the new normal," said Anthony Scriffignano, chief data scientist of Dun & Bradstreet, at a recent event hosted by the Churchill Club.

Inderpal Bhandari, the first chief data officer of IBM, also spoke at the event. The goal of the new role is to "change major processes an enterprise has so that their outcomes are better, so faster and better decisions get made," said Bhandari.

Some of the largest recent IPOs in tech are being fueled by big data. They include Cloudera and Hortonworks, which helped drive Hadoop, an open-source equivalent of Google's core MapReduce algorithm.

At Stanford's Data Science Initiative, researchers are working to put big-data techniques in the hands of the average company. "Machine learning is impressive but really hard to use. Even the most sophisticated companies might only have a couple of people who can apply those techniques optimally," said Stephen Eglash, executive director of the initiative. "I can imagine the day when these tools are available in the equivalent of Microsoft Office."

To get there, Stanford researchers are developing Snorkel, a tool to automate the process of labeling and ingesting big data sets. "It's far enough along that you can see that it will work," said Eglash. "We want the domain experts to use these techniques without needing a computer science expert."

The IEEE Big Data Initiative is taking a different approach, making large data sets freely available for research through its DataPort service. So far, they include examples as diverse as real-time feeds of New York City traffic and neuron movements in a human brain.

Commercial big-data projects are just as diverse, says Wayne Thompson, chief data scientist at SAS, a data analytics pioneer founded in 1976. "We are working with a semiconductor company to help reduce defects in their chip fab process through improved computer vision. One of our development partners is applying deep learning to help improve soccer players' performance. We also are applying deep learning to monitor and count endangered wildlife through footprint image analysis and tracking."

Smaller companies are getting traction, too. Although it has just 150 people, Real-Time Innovations Inc. (RTI) claims more than 1,000 design wins for its novel databus software for real-time monitoring. The software uses a subscribe-and-publish model for tracking nodes, typically on sensor networks.

One of RTI's first big deployments was a middleware server installed in the U.S.S. Cole after the ship was bombed in the Middle East. The software is also used in many hydroelectric plants, including Grand Coulee Dam, in hospital instruments built by GE Healthcare, and in wind turbine farms operated by Siemens.

The company recently named former Sun Microsystems co-founder Scott McNealy to an advisory board that will help it scale.
RTI’s business “is the next evolution of what we used to describe as ‘the network is the computer,’” said McNealy. “Today, the network is also the power grid and a lot of other things.”
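The databus mentioned above follows a subscribe-and-publish model: sensor nodes publish samples on named topics, and any number of monitors subscribe to the topics they care about without the publishers knowing who is listening. The sketch below is only a generic, in-process illustration of that pattern with hypothetical names; it is not RTI's software or API.

```python
from collections import defaultdict
from typing import Callable, Dict, List

class DataBus:
    """Minimal in-process publish-subscribe bus: publishers push samples to a
    topic; every subscriber registered on that topic receives each sample."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, sample: dict) -> None:
        for callback in self._subscribers[topic]:
            callback(sample)

# Usage: a turbine node publishes vibration readings; a monitor subscribes.
bus = DataBus()
bus.subscribe("turbine/7/vibration", lambda s: print("alert check:", s))
bus.publish("turbine/7/vibration", {"rms_mm_s": 4.2, "timestamp": 1496995200})
```

A production databus adds the pieces this sketch leaves out, such as network transport, discovery of publishers and subscribers, and quality-of-service handling for real-time delivery.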
Release time: 2017-06-09
